#HelloWorld. The days are now shorter, but this issue is longer. President Biden's October 30 Executive Order deserves no less. Plus, the UK AI Safety Summit warrants a drop-by, and three copyright and right-of-publicity theories come under a judicial microscope. Read on to catch up. Let's stay smart together. (Subscribe to the mailing list to receive future issues.)

Mission: Executive Order. The White House grabbed most of the AI headlines over the last two weeks after President Biden on October 30 signed his “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” (If staff reports are to be believed, the villainous “Entity” AI in Mission: Impossible – Dead Reckoning, Part One had something to do with it.)

The order itself is a sprawling, dense document, with applicability across various economic sectors from energy to healthcare to finance. Here's the gist:

  • The order calls for a “coordinated, Federal Government-wide approach” to studying and regulating AI, issuing instructions to at least 20 federal agencies and departments—in approximate order of appearance: Commerce, NIST, Energy, Homeland Security, NSF, State, Defense, Treasury, OMB, HHS, Labor, the USPTO, the Copyright Office, the FTC, the DOJ, Agriculture, Veterans Affairs, Transportation, Education, and the FCC.
  • Further guidance will be forthcoming from these agencies within the next year, at intervals ranging from five to twelve months, depending on the agency. NIST, for example, is tasked with a prominent role in section 4 of the order: Over the next nine months, it is to establish guidelines, best practices, and testing environments for auditing, evaluating, and “red teaming” AI models. (If you have questions about the specifics of how the order applies to your industry, please don't hesitate to reach out.)
  • The order is especially concerned in the near term with what it calls “dual-use foundation models.” These are defined, in section 3(k), as covering the largest models (“at least tens of billions of parameters”) that also “pose a serious risk” in three core areas: (1) “chemical, biological, radiological, or nuclear” weapons; (2) “powerful offensive cyber operations”; and (3) “evasion of human control or oversight through means of deception.”
  • Companies intending to develop “potential dual-use foundation models” will have to report their plans to the federal government, including the results of any “red-team testing” (structured adversarial testing designed to find flaws and vulnerabilities). But these requirements cover only the largest of models—those even larger than today's frontier models, like OpenAI's GPT-4 and Anthropic's Claude.
  • Here at The AI Update, we've been predicting since the spring the eventual widespread adoption of requirements to identify and label AI-synthesized content. The order continues that trend: Section 4.5 calls for the Commerce Department to develop guidance for “digital content authentication and synthetic content detection measures,” including digital “watermarking.”
  • Finally, for aficionados of contract definitions, section 3 of the order has some compact-yet-decently-accurate definitions of AI-specific terms like “artificial intelligence,” “generative AI,” “AI model,” “AI red-teaming,” and “model weight.” You may want to borrow them for your own tech contracts.

The UK Safety Summit. President Biden's Executive Order scooped the recent UK AI Safety Summit—billed as “the first global AI Safety Summit”—by two days. The meeting, held November 1-2, produced the Bletchley Declaration on AI Safety, which includes the UK, US, China, Japan, India, and EU constituents among its 28 signatories. The declaration is similar in purpose and theme to the voluntary commitments seven leading AI developers signed onto earlier this year in the US, covered in a prior issue of The AI Update. We remain hopeful that these voluntary initial undertakings are a road to somewhere more specific. Time will tell. South Korea and France are scheduled to host the next two summits, in 2024.

Some legal guidance in AI litigation. Back in the US, the federal case brought by a group of artists against Stability AI, the maker of the popular text-to-image Stable Diffusion model, yielded some early judicial guidance. The Northern District of California judge overseeing the lawsuit issued an opinion dismissing most of the claims for now, but giving the artists an opportunity to try again. Here are three core takeaways:

  • As expected, the court allowed the artists to go forward on the theory that the use of their registered copyrighted works to train the Stable Diffusion model violated the artists' copyright interests. These “training data” claims are the most popular theory of infringement in the current crop of generative AI cases.
  • On the flip side, the court dealt a blow to one of the artists' most ambitious copyright theories: that every output Stable Diffusion synthesizes is, by definition, an infringing “derivative work” of copyrighted images in the training data. The court seemed persuaded that a generated output must still be substantially similar to a copyrighted work (the classic test for copyright infringement) before it is deemed a “derivative work.”
  • Lastly, for the right-of-publicity claim, the court stressed that the named plaintiff artists had to show that Stability AI used their individual names—not the names of other artists or artistic styles generally—to advertise or promote the Stable Diffusion service. That kind of showing is a tall order in most cases.

What we're reading: A start-up named Vectara recently published a “Hallucination Leaderboard,” attempting to evaluate how often the leading large language models hallucinate under simulated conditions. Vectara prompted each LLM to summarize 1,000 short news documents and counted how often made-up facts were included. The winner? GPT-4, with a hallucination rate of 3%. The worst performer? Google's Palm-Chat, with a 27% rate. But that was an outlier: Most of the models tested had rates in the mid-to-high single digits. Still, in the knowledge business, even a single-digit percentage of factual inaccuracy may not be good enough.
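For the technically curious, the arithmetic behind a leaderboard like this is straightforward: run each source document through the model, flag summaries that assert facts absent from the source, and divide the flagged count by the total. Below is a minimal sketch in Python; the hallucination_rate helper, the echo-style summarize stand-in, and the crude word-overlap check are all our own illustrative assumptions, not Vectara's methodology or tooling (Vectara uses its own evaluation model to do the flagging).

```python
# Minimal, illustrative sketch of tallying a hallucination rate.
# All names below (hallucination_rate, summarize, is_hallucinated) are
# hypothetical stand-ins, not Vectara's actual leaderboard code.

def hallucination_rate(documents, summarize, is_hallucinated):
    """Fraction of generated summaries flagged as containing made-up facts."""
    flagged = sum(1 for doc in documents if is_hallucinated(doc, summarize(doc)))
    return flagged / len(documents)

# Toy demonstration: a "model" that simply echoes its source, and a crude
# word-overlap check standing in for a real factual-consistency classifier.
docs = [
    "The court dismissed most of the claims for now.",
    "The summit was held on November 1 and 2.",
]
summarize = lambda doc: doc
is_hallucinated = lambda doc, summary: not set(summary.split()) <= set(doc.split())

print(f"Hallucination rate: {hallucination_rate(docs, summarize, is_hallucinated):.0%}")
# -> "Hallucination rate: 0%" here; 30 flags out of 1,000 documents would print 3%.
```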

What should we be following? Have suggestions for legal topics to cover in future editions? Please send them to AI-Update@duanemorris.com. We'd love to hear from you and continue the conversation.

Editor-in-Chief: Alex Goranin

Deputy Editors: Matt Mousley and Tyler Marandola

If you were forwarded this newsletter, subscribe to the mailing list to receive future issues.

Disclaimer: This Alert has been prepared and published for informational purposes only and is not offered, nor should it be construed, as legal advice. For more information, please see the firm's full disclaimer.